perm filename MENTAL.XGP[F75,JMC]2 blob sn#196431 filedate 1976-01-12 generic text, type T, neo UTF8
PARTIAL DRAFT
January 12, 1976


(this draft compiled at 2:24 on January 12, 1976)


ASCRIBING MENTAL QUALITIES TO MACHINES

␈↓ α∧␈↓␈↓ αTTo␈αascribe␈αcertain␈α␈↓↓beliefs␈↓,␈α␈↓↓knowledge␈↓,␈α␈↓↓free␈αwill␈↓,␈α␈↓↓intentions␈↓,␈α␈↓↓consciousness␈↓,␈α␈↓↓abilities␈↓␈αor␈α␈↓↓wants␈↓␈αto

␈↓ α∧␈↓a␈α∩machine␈α∩or␈α∩computer␈α∩program␈α∩is␈α∩␈↓αlegitimate␈↓␈α∩whenever␈α∩such␈α∩ascription␈α∩expresses␈α∩the␈α∩same

␈↓ α∧␈↓information␈αabout␈αthe␈αmachine␈αthat␈αit␈αexpresses␈αabout␈αa␈αperson.␈α It␈αis␈α␈↓αuseful␈↓␈αwhen␈αthe␈αascription

␈↓ α∧␈↓helps␈αus␈αunderstand␈αthe␈αstructure␈αof␈αthe␈αmachine,␈αits␈αpast␈αor␈αfuture␈αbehavior,␈αor␈αhow␈αto␈αrepair␈αor

␈↓ α∧␈↓improve␈αit.␈α It␈αis␈αperhaps␈αnever␈α␈↓αlogically␈αrequired␈↓␈αeven␈αfor␈αhumans,␈αbut␈αa␈αpractical␈αtheory␈αof␈αthe

␈↓ α∧␈↓behavior␈α∞of␈α∞machines␈α∞or␈α
humans␈α∞may␈α∞require␈α∞mental␈α
qualities␈α∞or␈α∞qualities␈α∞isomorphic␈α∞to␈α
them.

␈↓ α∧␈↓Theories␈α
of␈α
belief,␈α
knowledge␈α
and␈α
wanting␈α
can␈α
be␈α
constructed␈α
for␈α
machines␈α
in␈α
a␈α∞simpler␈α
setting

␈↓ α∧␈↓than␈α↔for␈α↔humans␈α↔and␈α↔later␈α↔applied␈α⊗to␈α↔humans.␈α↔ Ascription␈α↔of␈α↔mental␈α↔qualities␈α↔is␈α⊗␈↓αmost

␈↓ α∧␈↓αstraightforward␈↓␈α∂for␈α∂machines␈α⊂of␈α∂known␈α∂structure␈α∂such␈α⊂as␈α∂thermostats␈α∂and␈α⊂computer␈α∂operating

␈↓ α∧␈↓systems, but is ␈↓αmost useful␈↓ when applied to machines whose structure is very incompletely known.

␈↓ α∧␈↓␈↓ αTThe␈αabove␈αviews␈αcome␈αfrom␈αwork␈αin␈αartificial␈αintelligence␈↓∧1␈↓␈α(abbreviated␈αAI).␈α They␈αcan␈αbe

␈↓ α∧␈↓generalized␈α
into␈α∞the␈α
assertion␈α
that␈α∞many␈α
of␈α
the␈α∞philosophical␈α
problems␈α
of␈α∞mind␈α
take␈α∞a␈α
practical

␈↓ α∧␈↓form␈α∪as␈α∪soon␈α∪as␈α∪one␈α∪takes␈α∀seriously␈α∪the␈α∪idea␈α∪of␈α∪making␈α∪machines␈α∪behave␈α∀intelligently.␈α∪ In

␈↓ α∧␈↓particular,␈α∪AI␈α∪raises␈α∪for␈α∪machines␈α∪two␈α∪issues␈α∪that␈α∪have␈α∪heretofore␈α∪been␈α∪considered␈α∪only␈α∪in

␈↓ α∧␈↓connection with people.

First, in designing intelligent programs and looking at them from the outside, we need to determine the conditions under which specific mental and volitional terms are applicable. We can typify these problems by asking when it is legitimate to say about a machine, *"It knows I want a reservation to Boston, and it can give it to me, but it won't."*

Second, when we want a *generally intelligent*² computer program, we must build into it a *general view* of what the world is like, with especial attention to facts about how the information required to solve problems is to be obtained and used. Thus we must provide it with some kind of *metaphysics* (general world-view) and *epistemology* (theory of knowledge), however naive.

␈↓ α∧␈↓␈↓ αTAs␈αmuch␈αas␈αpossible,␈αwe␈αwill␈αascribe␈αmental␈αqualities␈αseparately␈αfrom␈αeach␈αother␈αinstead␈αof

␈↓ α∧␈↓bundling␈α∞them␈α∞in␈α∞a␈α∞concept␈α∞of␈α∂mind.␈α∞ This␈α∞is␈α∞necessary,␈α∞because␈α∞present␈α∞machines␈α∂have␈α∞rather

␈↓ α∧␈↓varied␈α∂little␈α∂minds;␈α∂the␈α∂mental␈α∂qualities␈α∂that␈α∂can␈α∂legitimately␈α∂be␈α∂ascribed␈α∂to␈α∂them␈α∂are␈α∂few␈α∞and

␈↓ α∧␈↓differ␈αfrom␈αmachine␈αto␈αmachine.␈α
 We␈αwill␈αnot␈αeven␈αtry␈αto␈α
meet␈αobjections␈αlike,␈α␈↓↓"Unless␈αit␈αalso␈α
does

␈↓ α∧␈↓↓X, it is illegitimate to speak of its having mental qualities."␈↓

Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance. However, the machines mankind has so far found it useful to construct rarely have beliefs about beliefs. (Beliefs about beliefs will be needed by computer programs to reason about what knowledge they lack and where to get it.) Other mental qualities, such as love and hate, seem peculiar to human-like motivational structures and will not be required for intelligent behavior, but we could probably program computers to exhibit them if we wanted to, because our common sense notions about them translate readily into certain program and data structures. Still other mental qualities, e.g. humor and appreciation of beauty, seem much harder to model. While we will be quite liberal in ascribing *some* mental qualities even to rather primitive machines, we will try to be conservative in our criteria for ascribing any *particular* quality.

The successive sections of this paper will give philosophical and AI reasons for ascribing beliefs to machines; two new forms of definition that seem necessary for defining mental qualities, with examples of their use; examples of systems to which mental qualities are ascribed; some first attempts at defining a variety of mental qualities; some criticisms of other views on mental qualities; notes; and references.






WHY ASCRIBE MENTAL QUALITIES?

*Why should we want to ascribe beliefs to machines at all?* This is the opposite question to that involved in the controversy over *reductionism*. Instead of asking how mental qualities can be *reduced* to physical ones, we ask how to *ascribe* mental qualities to physical systems. In our opinion, this question is more natural and may lead to better answers to the questions of reductionism.

To put the issue sharply, consider a computer program for which we possess complete listings. The behavior of the program in any environment is determined from the structure of the program and can be found out by simulating the action of the program and the environment without having to deal with any concept of belief. Nevertheless, there are several reasons for ascribing belief and other mental qualities:

1. Although we may know the program, its state at a given moment is usually not directly observable, and the conclusions we can draw about its current state may be more readily expressed by ascribing certain beliefs or wants than in any other way.

2. Even if we can simulate the interaction of our program with its environment using another more comprehensive program, the simulation may be a billion times too slow. We also may not have the initial conditions of the environment or its laws of motion in a suitable form, whereas it may be feasible to make a prediction of the effects of the beliefs we ascribe to the program without any computer at all.

3. Ascribing beliefs may allow deriving general statements about the program's behavior that could not be obtained from any finite number of simulations.

4. The belief and goal structures we ascribe to the program may be easier to understand than the details of the program as expressed in its listing.






5. The belief and goal structure is likely to be close to the structure the designer of the program had in mind, and it may be easier to debug the program in terms of this structure than directly from the listing. In fact, it is often possible for someone to correct a fault by reasoning in general terms about the information in a program or machine, diagnosing what is wrong as a false belief, and looking at the details of the program or machine only sufficiently to determine how the false belief is represented and what mechanism caused it to arise.³

All the above reasons for ascribing beliefs are epistemological; i.e., ascribing beliefs is needed to adapt to limitations on our ability to acquire knowledge, use it for prediction, and establish generalizations in terms of the elementary structure of the program. Perhaps this is the general reason for ascribing higher levels of organization to systems.

Computers give rise to numerous examples of building a higher structure on the basis of a lower one and conducting subsequent analyses using the higher structure. The geometry of the electric fields in a transistor and its chemical composition give rise to its properties as an electric circuit element. Transistors are combined in small circuits and powered in standard ways to make logical elements such as ANDs, ORs, NOTs and flip-flops. Computers are designed with these logical elements to obey a desired order code; the designer usually needn't consider the properties of the transistors as circuit elements. The designer of a higher level language works with the order code and doesn't have to know about the logic; the user of the higher level language needn't know the computer's order code.

␈↓ α∧␈↓␈↓ αTIn␈αthe␈αabove␈αcases,␈α
users␈αof␈αthe␈αhigher␈αlevel␈α
can␈αcompletely␈αignore␈αthe␈αlower␈α
level,␈αbecause

␈↓ α∧␈↓the␈αbehavior␈αof␈αthe␈αhigher␈α
level␈αsystem␈αis␈αcompletely␈αdetermined␈α
by␈αthe␈αvalues␈αof␈αthe␈αhigher␈α
level

␈↓ α∧␈↓variables;␈αe.g.␈α in␈αorder␈αto␈αdetermine␈αthe␈αoutcome␈αof␈αa␈αcomputer␈αprogram,␈αone␈αshould␈αnot␈αhave␈αto

␈↓ α∧␈↓look␈αat␈αthe␈αstates␈αof␈αflip-flops.␈α However,␈αwhen␈αwe␈αascribe␈αmental␈αstructure␈αto␈αhumans␈αor␈αgoals␈αto




␈↓ α∧␈↓␈↓ ε|4␈↓ ∧
␈↓ α∧␈↓PARTIAL DRAFT␈↓ 
'January 12, 1976


␈↓ α∧␈↓society,␈α∪we␈α∪always␈α∪get␈α∪highly␈α∪incomplete␈α∩systems;␈α∪the␈α∪higher␈α∪level␈α∪behavior␈α∪cannot␈α∪be␈α∩fully

␈↓ α∧␈↓predicted␈αfrom␈αhigher␈αlevel␈αobservations␈αand␈αhigher␈αlevel␈α"laws"␈αeven␈αwhen␈αthe␈αunderlying␈αlower

␈↓ α∧␈↓level behavior is determinate.

Besides the above philosophical reasons for ascribing mental qualities to machines, I shall argue that in order to make machines behave intelligently, we will have to program them to ascribe beliefs etc. to each other and to people.

→→→→Here there will be more on machines' models of each other's minds.←←←←






































TWO METHODS OF DEFINITION
AND THEIR APPLICATION TO MENTAL QUALITIES

In our opinion, a major source of problems in defining mental and other philosophical concepts is the weakness of the methods of definition that have been *explicitly* used. We will consider two kinds of definition, *second order structural definition* and *definition relative to an approximate theory*, and their application to defining mental qualities.

1. *Second Order Structural Definition.*

Structural definitions of qualities are given in terms of the state of the system being described, while behavioral definitions are given in terms of actual or potential behavior.⁴

In the case of a specific known machine, one can often give a *first order structural definition*. Thus we might give a predicate B(s,p) such that if the machine is in state s, it is said to believe the sentence p provided B(s,p) is true. However, I don't think there is a general definition of belief having this form that applies to all machines in all environments.

Therefore we give a *second order predicate* β(B,W) that tells whether we regard the first order predicate B(s,p) as a "good" notion of belief in the *world* W. Such a predicate β will be called a *second order definition*; it gives criteria for criticizing an ascription of a quality to a system. Axiomatizations of belief give rise to second order definitions, and we suggest that both our common sense and scientific usage of not-directly-observable qualities corresponds more closely to second order structural definition than to any kind of behavioral definition. It should be noted that a second order definition cannot guarantee that there exist predicates B meeting the criterion β or that such a B is unique. It can also turn out that a quality is best defined as a member of a group of related qualities.

␈↓ α∧␈↓␈↓ αTThe␈αsecond␈αorder␈αdefinition␈αcriticizes␈αwhole␈αbelief␈αstructures␈αrather␈αthan␈αindividual␈αbeliefs.

␈↓ α∧␈↓We␈α∩can␈α∪treat␈α∩individual␈α∪beliefs␈α∩by␈α∪saying␈α∩that␈α∩a␈α∪system␈α∩believes␈α∪␈↓↓p␈↓␈α∩in␈α∪state␈α∩␈↓↓s␈↓␈α∪provided␈α∩all



␈↓ α∧␈↓␈↓ ε|6␈↓ ∧
␈↓ α∧␈↓PARTIAL DRAFT␈↓ 
'January 12, 1976


␈↓ α∧␈↓"reasonably␈α∃good"␈α∃␈↓↓B␈↓'s␈α∀satisfy␈α∃␈↓↓B(s,p)␈↓.␈α∃ Thus␈α∀we␈α∃are␈α∃distinguishing␈α∀the␈α∃"intersection"␈α∃of␈α∀the

␈↓ α∧␈↓reasonably good ␈↓↓B␈↓'s.
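The "intersection" treatment of individual beliefs can be sketched concretely. In the Python sketch below, the candidate predicates B1 and B2 and their temperature thresholds are invented purely for illustration; they are not part of the paper's proposal:

```python
# A system believes p in state s iff every "reasonably good" belief
# predicate B satisfies B(s, p); if the B's disagree, the individual
# belief is not ascribed.
def believes(state, p, good_Bs):
    return all(B(state, p) for B in good_Bs)

# Two toy candidate predicates for a machine whose state is just a
# temperature reading (the thresholds are invented assumptions):
def B1(s, p):
    return (p == "too cold" and s < 68) or (p == "too hot" and s > 72)

def B2(s, p):
    return (p == "too cold" and s < 69) or (p == "too hot" and s > 72)

print(believes(65, "too cold", [B1, B2]))    # True: all B's agree
print(believes(68.5, "too cold", [B1, B2]))  # False: the B's disagree
```

Only sentences on which every reasonably good predicate agrees survive into the ascribed belief set, which is the "intersection" spoken of above.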

␈↓ α∧␈↓␈↓ αTIt␈αseems␈αto␈αme␈αthat␈αthere␈αshould␈αbe␈αa␈αmetatheorem␈αof␈αmathematical␈αlogic␈αasserting␈αthat␈αnot

␈↓ α∧␈↓all␈α∪second␈α∪order␈α∪definitions␈α∪can␈α∪be␈α∩reduced␈α∪to␈α∪first␈α∪order␈α∪definitions␈α∪and␈α∪further␈α∩theorems

␈↓ α∧␈↓characterizing␈αthose␈α
second␈αorder␈α
definitions␈αthat␈αadmit␈α
such␈αreductions.␈α
 Such␈αtechnical␈αresults,␈α
if

␈↓ α∧␈↓they␈α⊂can␈α∂be␈α⊂found,␈α∂may␈α⊂be␈α∂helpful␈α⊂in␈α∂philosophy␈α⊂and␈α∂in␈α⊂the␈α∂construction␈α⊂of␈α⊂formal␈α∂scientific

␈↓ α∧␈↓theories.␈α⊃ I␈α⊃would␈α⊃conjecture␈α⊃that␈α⊃many␈α⊃of␈α⊃the␈α⊃informal␈α⊃philosophical␈α⊃arguments␈α⊃that␈α⊂certain

␈↓ α∧␈↓mental␈αconcepts␈αcannot␈αbe␈αreduced␈αto␈αphysics␈αwill␈αturn␈αout␈αto␈αbe␈αsketches␈αof␈αarguments␈αthat␈αthese

␈↓ α∧␈↓concepts require second (or higher) order definitions.

Here is a deliberately imprecise second order definition of belief. For each state s of the machine and each sentence p in a suitable language L, a belief predicate B(s,p) assigns truth or falsity according to whether the machine is considered to believe p when it is in state s. The language L is chosen for our convenience, and there is no assumption that the machine explicitly represents sentences of L in any way. Thus we can talk about the beliefs of Chinese, dogs, thermostats, and computer operating systems without assuming that they use English or our favorite first order language.

We now subject B(s,p) to certain criteria; i.e., β(B,W) is considered true provided the following conditions are satisfied:

1.1. The set Bel(s) of beliefs, i.e. the set of p's for which B(s,p) is assigned true, contains sufficiently "obvious" consequences of some of its members.

1.2. Bel(s) changes in a reasonable way when the state changes. We like new beliefs to be logical or "plausible" consequences of old ones, or to come in as *communications* in some language on the input lines, or to be *observations*, i.e. beliefs about the environment the information for which comes in on the input lines. The set of beliefs should not change too rapidly as the state changes.

1.3. We prefer the set of beliefs to be as consistent as possible. (Admittedly, consistency is not a quantitative concept in mathematical logic - a system is either consistent or not - but it would seem that we will sometimes have to ascribe inconsistent sets of beliefs to machines and people. Our intuition says that we should be able to maintain areas of consistency in our beliefs and that it may be especially important to avoid inconsistencies in the machine's purely analytic beliefs.)

1.4. Our criteria for belief systems can be strengthened if we identify some of the machine's beliefs as expressing goals, i.e. if we have beliefs of the form "It would be good if ...". Then we can ask that the machine's behavior be somewhat *rational*, i.e. *it does what it believes will achieve its goals*. The more of its behavior we can account for in this way, the better we will like B(s,p). We would also like to account for internal state changes as changes in belief in so far as this is reasonable.

1.5. If the machine communicates, i.e. emits sentences in some language that can be interpreted as assertions, questions and commands, we will want the assertions to be among its beliefs unless we are ascribing to it a goal or subgoal that involves lying. In general, its communications should be such as it believes will achieve its goals.

1.6. Sometimes we shall want to ascribe introspective beliefs, e.g. a belief that it does not know how to fly to Boston or even that it doesn't know what it wants in a certain situation.

1.7. Finally, we will prefer a more economical ascription B to a less economical one. The fewer beliefs we ascribe, and the less they change with state consistent with accounting for the behavior and the internal state changes, the better we will like it.
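Criterion 1.1 above lends itself to a small executable sketch. Here "obvious consequence" is modeled, purely as an assumption for the example, by a single application of modus ponens over sentences represented as strings and tuples of the form ("if", p, q):

```python
def obvious_consequences(beliefs):
    """One round of modus ponens: from p and ("if", p, q), derive q."""
    derived = set()
    for b in beliefs:
        if isinstance(b, tuple) and b[0] == "if" and b[1] in beliefs:
            derived.add(b[2])
    return derived

def satisfies_1_1(beliefs):
    """Criterion 1.1: Bel(s) contains the obvious consequences
    of its members (here, one modus ponens step)."""
    return obvious_consequences(beliefs) <= beliefs

bel = {"raining", ("if", "raining", "streets wet"), "streets wet"}
print(satisfies_1_1(bel))                    # True: closed under the rule
print(satisfies_1_1(bel - {"streets wet"}))  # False: a consequence is missing
```

A serious β(B,W) would impose the other criteria too (reasonable change of Bel(s) with state, rationality relative to goals, economy); this fragment only shows the shape such a check might take.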






The above criteria have been formulated somewhat vaguely. This would be bad if there were widely different ascriptions of beliefs to a particular machine that all met our criteria, or if the criteria allowed ascriptions that differed widely from our intuitions. My present opinion is that more thought will make the criteria somewhat more precise at no cost in applicability, but that they *should* still remain rather vague, i.e. we shall want to ascribe belief in a *family* of cases. However, even at the present level of vagueness, there probably won't be radically different equally "good" ascriptions of belief for systems of practical interest. If there were, we would notice the ambiguity in ordinary language.

Of course we will need precise axiomatizations of belief and other mental qualities to build into particular intelligent computer programs.

2. *Definitions relative to an approximate theory.*

Certain concepts, e.g. *X can do Y*, are meaningful only in connection with a rather complex theory. For example, suppose we denote the state of the world by s, and suppose we have functions f1(s),...,fn(s) that are directly or indirectly observable. Suppose further that F(s) is another function of the world-state but that we can approximate it by

	F"(s) = F'(f1(s),...,fn(s)).

Now consider the counterfactual conditional sentence, "If f2(s) were 4, then F(s) would be 3 - calling the present state of the world s0." By itself, this sentence has no meaning, because no definite state s of the world is specified by the condition. However, in the framework of the functions f1(s),...,fn(s) and the given approximation to F(s), the assertion can be verified by computing F' with all arguments except the second having the values associated with the state s0 of the world.
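The verification procedure just described can be made concrete in a short Python sketch. The observables f1, f2, f3 and the form of F' below are toy functions invented only so that the counterfactual comes out as in the text:

```python
def F_prime(f1, f2, f3):
    """The approximate law F'; F"(s) = F_prime(f1(s), f2(s), f3(s))."""
    return f1 + f2 - f3

def observables(s):
    """The directly observable functions f1(s), f2(s), f3(s)."""
    return s["f1"], s["f2"], s["f3"]

def counterfactual(s0, index, new_value):
    """Value F would take if the index-th observable were new_value,
    all other arguments keeping their values in the present state s0."""
    fs = list(observables(s0))
    fs[index] = new_value
    return F_prime(*fs)

s0 = {"f1": 1, "f2": 7, "f3": 2}
print(F_prime(*observables(s0)))  # actual approximate value: 1 + 7 - 2 = 6
print(counterfactual(s0, 1, 4))   # "if f2(s) were 4": 1 + 4 - 2 = 3
```

Note that the counterfactual is evaluated entirely inside the approximate theory: only the tuple of observable values is varied, and no "definite state s of the world" is ever constructed.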

This gives rise to some remarks:






2.1. The most straightforward case of counterfactuals arises when the state of a phenomenon has a distinguished Cartesian product structure. Then the meaning of a change of one component without changing the others is quite clear. Changes of more than one component also have definite meanings. This is a stronger structure than the *possible worlds* structure discussed in (Lewis 1973).

2.2. The usual case is one in which the state s is a substantially unknown entity and the form of the function F is also unknown, but the values of f1(s),...,fn(s) and the function F' are much better known. Suppose further that F"(s) is known to be only a fair approximation to F(s). We now have a situation in which the counterfactual conditional statement is meaningful as long as it is not examined too closely, i.e. as long as we are thinking of the world in terms of the values of f1,...,fn, but when we go beyond the approximate theory, the whole meaning of the sentence seems to disintegrate.

Our idea is that this is a very common phenomenon. In particular it applies to statements of the form *"X can do Y"*. Such statements can be given a precise meaning in terms of a system of interacting automata, as is discussed in detail in (McCarthy and Hayes 1970). We say that Automaton 1 can put Automaton 3 in state 5 at time 10 by asking a question about an automaton system in which the outputs from Automaton 1 are replaced by inputs from outside the system. Namely, we ask whether there is a sequence of inputs to the new system that *would* put Automaton 3 in state 5 at time 10; if yes, we say that Automaton 1 *could* do it in the original system even though we may be able to show that it won't emit the necessary outputs. In that paper, we argue that this definition corresponds to the intuitive notion of *X can do Y*.
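This automaton reading of *can* admits a brute-force rendering: detach Automaton 1, treat its former outputs as free external inputs, and search over input sequences. The transition functions below are invented toy dynamics, not the construction of the 1970 paper:

```python
from itertools import product

def step(state, inp):
    """One step of the residual system (here just Automata 2 and 3) when
    the external input `inp` stands in for Automaton 1's former output."""
    a2, a3 = state
    return ((a2 + inp) % 4, (a3 + a2) % 6)

def can_reach(initial, target_a3, deadline, inputs=(0, 1)):
    """Is there any input sequence of length `deadline` after which
    Automaton 3 is in state target_a3?  If yes, we say Automaton 1
    *can* put Automaton 3 there by that time."""
    for seq in product(inputs, repeat=deadline):
        state = initial
        for inp in seq:
            state = step(state, inp)
        if state[1] == target_a3:
            return True
    return False

# "Automaton 1 can put Automaton 3 in state 5 at time 10":
print(can_reach((0, 0), target_a3=5, deadline=10))  # True
print(can_reach((0, 0), target_a3=5, deadline=1))   # False: one step isn't enough
```

The existential search over free input sequences is exactly what makes *could* true even when the actual Automaton 1, reattached, won't emit the required outputs.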

What was not noted in that paper is that modelling the situation by the particular system of interacting automata is an approximation, and the sentences involving *can* derived from the approximation cannot necessarily be translated into single assertions about the real world.




I contend that the statement, *"I can go skiing tomorrow, but I don't intend to, because I want to finish this paper"*, has the following properties:

1. It has a precise meaning in a certain approximate theory in which I and my environment are considered as collections of interacting automata.

␈↓ α∧␈↓␈↓ αT2.␈αIt␈αcannot␈αbe␈αdirectly␈αinterpreted␈αas␈αa␈αstatement␈αabout␈αthe␈αworld␈αitself,␈αbecause␈αit␈αcan't␈αbe

␈↓ α∧␈↓stated␈α∞in␈α∞what␈α
total␈α∞configurations␈α∞of␈α
the␈α∞world␈α∞the␈α
success␈α∞of␈α∞my␈α
attempt␈α∞to␈α∞go␈α
skiing␈α∞is␈α∞to␈α
be

␈↓ α∧␈↓validated.

3. The approximate theory within which the statement is meaningful may have an objectively preferred status, in that it may be the only theory not enormously more complex that enables my actions and mental states to be predicted.

4. The statement may convey useful information.

␈↓ α∧␈↓␈↓ αTOur␈αconclusion␈αis␈α
that␈αthe␈αstatement␈α
is␈α␈↓αtrue␈↓,␈αbut␈α
in␈αa␈αsense␈α
that␈αdepends␈αessentially␈α
on␈αthe

␈↓ α∧␈↓approximate␈α∂theory,␈α∞and␈α∂that␈α∂this␈α∞intellectual␈α∂situation␈α∞is␈α∂normal␈α∂and␈α∞should␈α∂be␈α∂accepted.␈α∞ We

␈↓ α∧␈↓further␈α⊂conclude␈α⊂that␈α⊃the␈α⊂old-fashioned␈α⊂common␈α⊃sense␈α⊂analysis␈α⊂of␈α⊃a␈α⊂personality␈α⊂into␈α⊃␈↓↓will␈↓␈α⊂and

␈↓ α∧␈↓␈↓↓intellect␈↓␈α
and␈α
other␈α
components␈α
may␈αbe␈α
valid␈α
and␈α
might␈α
be␈αput␈α
on␈α
a␈α
precise␈α
scientific␈αfooting␈α
using

␈↓ α∧␈↓␈↓↓definitions relative to an approximate theory␈↓.




















EXAMPLES OF SYSTEMS WITH MENTAL QUALITIES

Let us consider some examples of machines and programs to which we may ascribe belief and goal structures.

1. *Thermostats.* Ascribing beliefs to simple thermostats is not really necessary, because their operation can be well understood without it. However, their very simplicity makes it clearer what is involved in the ascription, and we maintain (to some extent as a provocation aimed at those who regard attribution of beliefs to machines as mere intellectual sloppiness) that the ascription is legitimate even if unnecessary.⁵

First let us consider a simple thermostat that turns off the heat when the temperature is a degree above the temperature set on the thermostat, turns on the heat when the temperature is a degree below the desired temperature, and leaves the heat as is when the temperature is in the two degree range around the desired temperature. The simplest belief predicate B(s,p) ascribes belief to only two sentences: "The room is too cold" and "The room is too hot", and these beliefs are assigned to states of the thermostat so that in the two degree range, neither is believed. When the thermostat believes the room is too cold or too hot, it sends a message to that effect to the furnace. A slightly more complex belief predicate could also be used in which the thermostat has a belief about what the temperature should be and another belief about what it is. It is not clear which is better, but if we wished to consider possible errors in the thermometer, then we would have to be able to ascribe beliefs about what the temperature is. We do not ascribe to it any other beliefs; it has no opinion even about whether the heat is on or off, or about the weather, or about who won the battle of Waterloo. Moreover, it has no introspective beliefs, i.e. it doesn't believe that it believes the room is too hot.
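A minimal Python sketch of this simplest belief predicate; the set point of 70 degrees is an arbitrary illustrative choice:

```python
SET_POINT = 70.0  # the temperature set on the thermostat (illustrative)

def B(s, p):
    """B(s,p): does the thermostat, whose state s is its temperature
    reading, believe the sentence p?  Only two sentences are ascribable,
    and inside the two degree range neither is believed."""
    if p == "The room is too cold":
        return s < SET_POINT - 1
    if p == "The room is too hot":
        return s > SET_POINT + 1
    return False  # no other beliefs, and no introspective beliefs

print(B(68.0, "The room is too cold"))  # True: more than a degree too low
print(B(70.5, "The room is too hot"))   # False: inside the two degree range
print(B(68.0, "The heat is on"))        # False: no opinion on this sentence
```

The last line illustrates the point in the text: every sentence outside the two ascribable ones simply fails to be believed, which is how the thermostat's lack of opinions about the weather or Waterloo shows up in the predicate.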

The temperature control system in my house may be described as follows: Thermostats upstairs and downstairs tell the central system to turn on or shut off hot water flow to these areas. A central water-temperature thermostat tells the furnace to turn on or off, thus keeping the central hot water reservoir at the right temperature. Recently it was too hot upstairs, and the question arose as to whether the upstairs thermostat mistakenly *believed* it was too cold upstairs or whether the furnace thermostat mistakenly *believed* the water was too cold. It turned out that neither mistake was made; the upstairs controller *tried* to turn off the flow of water but *couldn't*, because the valve was stuck. The plumber came once and found the trouble, and came again when a replacement valve was ordered. Since the services of plumbers are increasingly expensive, and microcomputers are increasingly cheap, one is led to design a temperature control system that would *know* a lot more about the thermal state of the house and its own state of health.

␈↓ α∧␈↓␈↓ αTIn␈αthe␈αfirst␈αplace,␈αwhile␈αthe␈αsystem␈α␈↓↓couldn't␈↓␈αturn␈αoff␈αthe␈αflow␈αof␈αhot␈αwater␈αupstairs,␈αthere␈αis

␈↓ α∧␈↓no␈α⊃reason␈α⊃to␈α⊃ascribe␈α⊃to␈α⊃it␈α⊃the␈α⊃␈↓↓knowledge␈↓␈α⊃that␈α⊃it␈α⊃couldn't,␈α⊃and␈α⊃␈↓↓a␈α⊃fortiori␈↓␈α⊃it␈α⊃had␈α⊃no␈α∩ability␈α⊃to

␈↓ α∧␈↓␈↓↓communicate␈↓␈α
this␈α
␈↓↓fact␈↓␈αor␈α
to␈α
take␈α
it␈αinto␈α
account␈α
in␈α
controlling␈αthe␈α
system.␈α
 A␈α
more␈αadvanced␈α
system

␈↓ α∧␈↓would␈α
know␈α
whether␈α
the␈α∞␈↓↓actions␈↓␈α
it␈α
␈↓↓attempted␈↓␈α
succeeded,␈α∞and␈α
it␈α
would␈α
communicate␈α∞failures␈α
and

␈↓ α∧␈↓adapt␈αto␈αthem.␈α (We␈αadapted␈αto␈αthe␈αfailure␈αby␈αturning␈αoff␈αthe␈αwhole␈αsystem␈αuntil␈αthe␈αwhole␈αhouse

␈↓ α∧␈↓cooled␈αoff␈αand␈αthen␈αletting␈αthe␈αtwo␈αparts␈αwarm␈αup␈αtogether.␈α The␈αpresent␈αsystem␈αhas␈αthe␈α␈↓↓physical

␈↓ α∧␈↓↓capability␈↓ of doing this even if it hasn't the ␈↓↓knowledge␈↓ or the ␈↓↓will␈↓.

2. *Self-reproducing intelligent configurations in a cellular automaton world.*  A *cellular automaton* system assigns to each vertex in a certain graph a finite automaton.  The state of each automaton at time *t+1* depends on its state at time *t* and the states of its neighbors at time *t*.  The most common graph is the array of points *(x,y)* in the plane with integer co-ordinates *x* and *y*.  The first use of cellular automata was by von Neumann (196?), who found a 27 state automaton that could be used to construct self-reproducing configurations that were also universal computers.






␈↓ α∧␈↓The␈αbasic␈αautomaton␈αin␈αvon␈αNeumann's␈αsystem␈αhad␈αa␈αdistinguished␈αstate␈αcalled␈α0␈αand␈αa␈αpoint␈αin

␈↓ α∧␈↓state␈α∩0␈α∪whose␈α∩four␈α∩neighbors␈α∪were␈α∩also␈α∪in␈α∩that␈α∩state␈α∪would␈α∩remain␈α∩in␈α∪state␈α∩0.␈α∪ The␈α∩initial

␈↓ α∧␈↓configurations␈α⊂considered␈α⊂had␈α⊂all␈α⊂but␈α⊂a␈α⊂finite␈α⊂number␈α⊂of␈α⊂cells␈α⊂in␈α⊂state␈α⊂0,␈α⊂and,␈α⊂of␈α⊃course,␈α⊂this

␈↓ α∧␈↓property would persist although the number of non-zero cells might grow indefinitely with time.

The self-reproducing system used the states of a long strip of non-zero cells as a "tape" containing instructions to a "universal constructor" configuration that would construct a copy of the configuration to be reproduced, but with each cell in a passive state that would persist as long as its neighbors were also in passive states.  After the construction phase, the tape would be copied to make the tape for the new machine, and then the new system would be set in motion by activating one of its cells.  The new system would then move away from its mother, and the process would start over.  The purpose of the design was to demonstrate that arbitrarily complex configurations could be self-reproducing - the complexity being assured by also requiring that they be universal computers.

Since von Neumann's time, simpler basic cells admitting self-reproducing universal computers have been discovered.  Perhaps the simplest is the two state Life automaton of John Conway (196?).  The state of a cell at time *t+1* is determined by its state at time *t* and the states of its eight neighbors at time *t*.  Namely, a point whose state is 0 will change to state 1 if exactly three of its neighbors are in state 1.  A point whose state is 1 will remain in state 1 if two or three of its neighbors are in state 1.  In all other cases the state becomes or remains 0.
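Stated as a program, the rule just given can be sketched as follows (a minimal Python sketch; representing the infinite plane by the set of live cells is an implementation convenience, not part of Conway's definition):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life.  `live` is the set of (x, y)
    cells in state 1; every other cell of the plane is in state 0."""
    # For every cell adjacent to a live cell, count its live neighbors.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly three live neighbors; survives with two or
    # three; in all other cases the state becomes or remains 0.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

A row of three live cells (a "blinker") oscillates between a horizontal and a vertical row, which makes a convenient check of the rule.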

Conway's initial intent was to model a birth and death process whereby a cell is born (goes into state 1) if it has the right number of living neighbors (namely three) and dies if it is either too lonely (has none or one neighbor in state 1) or is overcrowded (has four or more living neighbors).  He also asked whether infinitely growing configurations were possible, and Gosper






first proved that there were.  More than that, it was shown that self-reproducing universal computers could be built up as Life configurations.

Let us now imagine that there are a number of such self-reproducing universal computers operating in the Life plane, and suppose that they have been programmed to study the properties of their world and to communicate among themselves about it, pursuing various goals co-operatively and competitively.  Let's call these configurations robots.  In some respects their intellectual and scientific problems will be like ours, but in one major respect they will live in a simpler world than ours has been shown to be.  Namely, the fundamental physics of their world is that of the Life automaton, and there is no obstacle to each robot *knowing* this physics, at least in the sense of being able to simulate the evolution of a Life configuration starting in the initial state.  Moreover, if the initial state of the robot world is finite, it can have been recorded in each robot in the beginning or else recorded on a strip of cells that the robots can read.  (The infinite regress of having to describe the description is avoided by a convention that the description is not described, but can be read *both* as a description of the world *and* as a description of the description itself.)

These robots then know the initial state of their world and its laws of motion.  Therefore, they can simulate as much of their world's history as they want, assuming that each of them can grow into unoccupied space so as to have memory to store the states of the world being simulated.  Of course, this simulation is much slower than real time, so they can never catch up with the present, let alone predict the future.  This is quite evident if we imagine the simulation carried out in a straightforward way, in which a list of currently active cells in the simulated world is updated according to the Life rule, but it also applies to clever mathematical methods that might predict millions of steps ahead.  (Some Life configurations, e.g. static ones or ones containing single






*gliders* or *cannon*, can have their distant futures predicted with little computing.)  Namely, if there were an algorithm for such prediction, a robot could be made that would predict its own future and then disobey the prediction.

Now we come to the point of this long disquisition.  Suppose that we wish to program a robot to be successful in the Life world in competition or co-operation with the others.  Without any idea of how to give a mathematical proof, I will claim that our robot will need programs that ascribe purposes and beliefs to its fellow robots and predict how they will react to our robot's own actions by assuming that *they will act in ways that they believe will achieve their goals*.  Our robot can acquire these mental theories in several ways: First, we might design in such programs and install them in the initial configuration of the world.  Second, it might be programmed to acquire these programs by induction from its experience and perhaps pass them on to others through an educational system.  Third, it might derive the psychological laws from the fundamental physics of the world and its knowledge of the initial configuration, or it might discover how robots are built from Life cells by doing experimental "biology".

Knowing the Life physics without some information about the initial configuration is insufficient to derive the *psychological* laws, because robots can be constructed in the Life world in an infinity of ways.  This follows from the "folk theorem" that the Life automaton is universal in the sense that any cellular automaton can be constructed by taking sufficiently large squares of Life cells as the basic cell of the other automaton.⁶

Our own intellectual position is more difficult than that of the Life robots.  We don't know the fundamental physics of our world, and we can't even be sure that its fundamental physics is describable in finite terms.  Even if we knew the physical laws, they seem to preclude precise knowledge of an initial state and precise calculation of its future, both for quantum mechanical






reasons and because the continuous functions involved can't necessarily be described by a finite amount of information.

The point of the cellular automaton robot example is that much of human mental structure is not an accident of evolution or even of the physics of our world, but is required for successful problem solving behavior and must be designed into or evolved by any system that exhibits such behavior.

3. *Computer time-sharing systems.*  These complicated computer programs allocate computer time and other resources among users.  They allow each user of the computer to behave as though he had a computer of his own, but also allow them to share files of data and programs and to communicate with each other.  They are often used for many years with continual small changes, and the people making the changes and correcting errors are often not the original authors of the system.  A person confronted with the task of correcting a malfunction or making a change in a time-sharing system can conveniently use a mentalistic model of the system.

Thus suppose a user complains that the system will not run his program.  Perhaps the system believes that he doesn't want to run, perhaps it persistently believes that he has just run, perhaps it believes that his quota of computer resources is exhausted, or perhaps it believes that his program requires a resource that is unavailable.  Testing these hypotheses can often be done with surprisingly little understanding of the internal workings of the program.
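Each hypothesis can be tested against the system's externally observable state without opening the program up.  A hypothetical Python sketch (the state fields and the wording of the hypotheses are invented for illustration; no actual time-sharing system is being described):

```python
def diagnose(state):
    """Return the mentalistic hypotheses consistent with the observed
    state.  `state` is a dict of externally observable facts; the
    field names here are illustrative only."""
    hypotheses = []
    if not state.get("run_requested"):
        hypotheses.append("believes the user doesn't want to run")
    if state.get("marked_recently_run"):
        hypotheses.append("believes the user has just run")
    if state.get("quota_used", 0) >= state.get("quota", 0):
        hypotheses.append("believes the user's quota is exhausted")
    if state.get("missing_resources"):
        hypotheses.append("believes a required resource is unavailable")
    return hypotheses
```

The point is that the checks consult only the system's visible behavior and records, not its internal workings.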

→→→→→There will be more examples here of the belief of time-sharing systems.←←←

4. *Programs designed to reason.*  Suppose we explicitly design a program to represent information by sentences in a certain language stored in the memory of the computer, and to decide what to do by making inferences and doing what it concludes will advance its goals.  Naturally, we would hope that our previous second order definition of belief will "approve of" a *B(p,s)* that






ascribes to the program belief in the sentences explicitly built in.  We would be somewhat embarrassed if someone were to show that our second order definition approved as well or better of an entirely different set of beliefs.

Such a program was first proposed in (McCarthy 1960), and here is how it might work:

Information about the world is stored in a wide variety of data structures.  For example, a visual scene received by a TV camera may be represented by a 512x512x3 array of numbers representing the intensities of three colors at the points of the visual field.  At another level, the same scene may be represented by a list of regions, and at a further level there may be a list of physical objects and their parts together with other information about these objects obtained from non-visual sources.  Moreover, information about how to solve various kinds of problems may be represented by programs in some programming language.

However, all the above representations are subordinate to a collection of sentences in a suitable first order language that includes set theory.  By subordinate, we mean that there are sentences that tell what the data structures represent and what the programs do.  New sentences can arise by a variety of processes: by inference from sentences already present, by computation from the data structures representing observations, ...
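A toy version of this arrangement can be sketched in a few lines (the sentence syntax and rule format are invented for illustration; the proposal above envisions a first order language with set theory, far richer than this):

```python
# Sentences in a toy language, stored as strings.  The raw observation
# array below is subordinate to them: sentences record what it shows.
sentences = {"goal(warm_house)"}
rules = [("cold(house)", "should(heat_on)")]   # if-then sentence pairs

observation = [14, 15, 13]                     # raw temperature readings
if sum(observation) / len(observation) < 18:
    sentences.add("cold(house)")               # computed from a data structure

# Inference to a fixed point: from a rule (p, q) and p, conclude q.
changed = True
while changed:
    changed = False
    for p, q in rules:
        if p in sentences and q not in sentences:
            sentences.add(q)
            changed = True
```

Both routes by which new sentences arise in the text - inference from sentences already present and computation from observation data - appear here in miniature.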

→→→→→There will be more here about what mental qualities should be programmed.←←←


















"GLOSSARY" OF MENTAL QUALITIES

In this section we give short "definitions" for machines of a collection of mental qualities.  We include a number of terms which give us difficulty, with an indication of what the difficulties seem to be.

1. *Actions*.  We want to distinguish the actions of a being from events that occur in its body and that affect the outside world.  For example, we wish to distinguish a random twitch from a purposeful movement.  This is not difficult *relative to a theory of belief that includes intentions*.  One's purposeful actions are those that would have been different had one's intentions been different.  This requires that the theory of belief have sufficient Cartesian product structure so that the counterfactual conditional "if its intentions had been different" is defined in the theory.  As explained in the section on definitions relative to an approximate theory, it is not necessary that the counterfactual be given a meaning in terms of the real world.

2. *Introspection and self-knowledge.*

We say that a machine introspects when it comes to have beliefs about its own mental state.  A simple form of introspection takes place when a program determines whether it has certain information and if not asks for it.  Often an operating system will compute a check sum of itself every few minutes to verify that it hasn't been changed by a software or hardware malfunction.
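This check-sum form of introspection is simple enough to sketch (a Python illustration; the operating systems in question summed their memory image with far simpler arithmetic than a modern hash, and the "program text" below is an invented stand-in):

```python
import hashlib

def checksum(program_text: bytes) -> str:
    """Digest of a program's text.  Recomputed periodically, a changed
    digest reveals a software or hardware malfunction."""
    return hashlib.sha256(program_text).hexdigest()

# Compute a baseline while the program is known to be good...
good = b"MOV A,B\nJMP LOOP\n"
baseline = checksum(good)

# ...then verify, every few minutes, that nothing has been altered.
assert checksum(good) == baseline
assert checksum(b"MOV A,B\nJMP BAD\n") != baseline
```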

In principle, introspection is easier for computer programs than for people, because the entire memory in which programs and data are stored is available for inspection.  In fact, a computer program can be made to predict how it would react to particular inputs provided it has enough free storage to perform the calculation.  This situation smells of paradox, and there is one.  Namely, if a program could predict its own actions in less time than it takes to carry out the action, it could refuse to do what it has predicted for itself.  This only shows that self-simulation is necessarily a slow process, and this is not surprising.




However, present programs do little interesting introspection.  This is just a matter of the undeveloped state of artificial intelligence; programmers don't yet know how to make a computer program look at itself in a useful way.

3. *Consciousness and self-consciousness*.  In accordance with the general approach of this paper, a being is considered self-conscious iff it has certain beliefs about itself.  However, we must remember that beliefs are taken as sentences in our language, and by ascribing beliefs we are not asserting that the being uses that language directly or any other language.

Here is a hypothesis arising from artificial intelligence concerning the relation between language and thought.  Imagine a person or machine that represents information internally in a huge network.  Each node of the network has references to other nodes through relations.  (If the system has a variable collection of relations, then the relations have to be represented by nodes, and we get a symmetrical theory if we suppose that each node is connected to a set of pairs of other nodes.)  We can imagine this structure to have a long term part and also extremely temporary parts representing current *thoughts*.  Naturally, each being has its own network depending on its own experience.  A thought is then a temporary node currently being referenced by the mechanism of consciousness.  Its meaning is determined by its references to other nodes, which in turn refer to yet other nodes.  Now consider the problem of communicating a thought to another being.

Its full communication would involve transmitting the entire network that can be reached from the given node, and this would ordinarily constitute the entire experience of the being.  More than that, it would be necessary to also communicate the programs that take action on the basis of encountering certain nodes.  Even if all this could be transmitted, the recipient would still have to find equivalents for the information in terms of its own network.  Therefore, thoughts have to be translated into a public language before they can be communicated.
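How much a "full communication" would drag along is easy to see if the network is modeled as a dictionary from nodes to the nodes they reference (a minimal Python sketch; the node names and the adjacency representation are invented for illustration):

```python
def reachable(node, refs):
    """All nodes reachable from `node` through its references -- what
    full communication of the thought rooted there would have to
    transmit.  `refs` maps each node to the nodes it refers to."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(refs.get(n, ()))
    return seen
```

Even a small cycle ("fido" refers back to "dog") pulls in everything connected to it, which is why the whole reachable portion, not the single node, would have to be sent.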




␈↓ α∧␈↓␈↓ αTA␈αlanguage␈αis␈αalso␈αa␈αnetwork␈αof␈αassociations␈αand␈αprograms.␈α However,␈αcertain␈αof␈αthe␈αnodes

␈↓ α∧␈↓in␈α∂this␈α∂network␈α∞(more␈α∂accurately␈α∂a␈α∞␈↓↓family␈↓␈α∂of␈α∂networks,␈α∞since␈α∂no␈α∂two␈α∞people␈α∂speak␈α∂precisely␈α∞the

␈↓ α∧␈↓same␈α
language)␈α
are␈αassociated␈α
with␈α
words␈αor␈α
set␈α
phrases.␈α Sometimes␈α
the␈α
translation␈αfrom␈α
thoughts

␈↓ α∧␈↓to␈α
sentences␈α
is␈α
easy,␈α
because␈αlarge␈α
parts␈α
of␈α
the␈α
private␈αnetworks␈α
are␈α
taken␈α
from␈α
the␈αpublic␈α
network,

␈↓ α∧␈↓and␈α
there␈αis␈α
an␈α
advantage␈αin␈α
preserving␈αthe␈α
correspondence.␈α
 However,␈αthe␈α
translation␈α
is␈αalways

␈↓ α∧␈↓approximate␈α∂(in␈α∂sense␈α∂that␈α∂still␈α∂lacks␈α∂a␈α∂technical␈α∂definition),␈α∂and␈α∂some␈α∂areas␈α∂of␈α∂experience␈α∂are

␈↓ α∧␈↓difficult␈α∪to␈α∀translate␈α∪at␈α∪all.␈α∀ Sometimes␈α∪this␈α∀is␈α∪for␈α∪intrinsic␈α∀reasons,␈α∪and␈α∀sometimes␈α∪because

␈↓ α∧␈↓particular␈αcultures␈αdon't␈αuse␈αlanguage␈αin␈αthis␈αarea.␈α (It␈αis␈αmy␈αimpression␈αthat␈αcultures␈αdiffer␈αin␈αthe

␈↓ α∧␈↓extent␈αto␈αwhich␈αinformation␈αabout␈αfacial␈α
appearance␈αthat␈αcan␈αbe␈αused␈αfor␈αrecognition␈α
is␈αverbally

␈↓ α∧␈↓transmitted).␈α According␈αto␈αthis␈αscheme,␈αthe␈α"deep␈αstructure"␈αof␈αa␈αpublicly␈αexpressible␈αthought␈αis␈αa

␈↓ α∧␈↓node␈αin␈αthe␈αpublic␈αnetwork.␈α It␈αis␈αtranslated␈αinto␈αthe␈αdeep␈αstructure␈αof␈αa␈αsentence␈αas␈αa␈αtree␈αwhose

␈↓ α∧␈↓terminal␈α
nodes␈α
are␈α
the␈α
nodes␈α
to␈α
which␈αwords␈α
or␈α
set␈α
phrases␈α
are␈α
attached.␈α
 This␈α
"deep␈αstructure"

␈↓ α∧␈↓then must be translated into a string in a spoken or written language.

The need to use language to express thought also applies when we have to ascribe thoughts to other beings, since we cannot put the entire network into a single sentence.

→→→→→→→→→→There is more to come here about what ideas are needed for self-consciousness.←←←←←←←←←←←

4. *Intentions.*

We may say that a machine intends to perform an action when it believes that it will perform the action and it believes that the action will further a goal.  However, further analysis may show that no such first order definition in terms of belief adequately describes intentions.  In this case, we can try a second order definition based on an axiomatization of a predicate *I(a,s)* meaning that the machine intends the action *a* when it is in state *s*.
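The first order version of the definition can be written down directly (a toy Python sketch; the sentence syntax and the belief set are invented for illustration, and the second order *I(a,s)* axiomatization contemplated above is exactly what this sketch does not capture):

```python
def intends(action, beliefs):
    """First order sketch: the machine intends `action` iff it believes
    it will perform the action and believes the action will further
    some goal.  `beliefs` is a set of sentences in an invented syntax."""
    return (f"will_do({action})" in beliefs and
            any(b.startswith(f"furthers({action},") for b in beliefs))
```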

5. *Free will*




When we program a computer to make choices intelligently after determining its options, examining their consequences, and deciding which is most favorable or most moral or whatever, we must program it to take an attitude towards its freedom of choice essentially isomorphic to that which a human must take to his own.

We can define whether a particular action was free or forced relative to a theory that ascribes beliefs and within which beings do what they believe will advance their goals.  In such a theory, action is precipitated by a belief of the form *I should do X now*.  We will say that the action was free if changing the belief to *I shouldn't do X now* would have resulted in the action not being performed.  This requires that the theory of belief have sufficient Cartesian product structure so that changing a single belief is defined, but it doesn't require defining what the state of the world would be if a single belief were different.
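Within such a theory the test is mechanical: flip the one belief and re-run whatever decision procedure the theory ascribes to the being (a toy Python sketch; the belief syntax and the `decide` procedure are invented for illustration):

```python
def decide(beliefs):
    """A being that simply does what it believes it should do."""
    return {b[len("should("):-1] for b in beliefs
            if b.startswith("should(")}

def was_free(action, beliefs, decide_fn):
    """The action was free, relative to the theory embodied in
    `decide_fn`, if replacing the belief "I should do X now" with
    "I shouldn't do X now" would have resulted in the action not
    being performed."""
    flipped = (beliefs - {f"should({action})"}) | {f"shouldnt({action})"}
    return action in decide_fn(beliefs) and action not in decide_fn(flipped)
```

Only the single belief changes; nothing is said about what the state of the world would be if it were different, which matches the modest Cartesian product requirement above.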

This isn't the whole free will story, because moralists are also concerned with whether praise or blame may be attributed to a choice.  The following considerations would seem to apply to any attempt to define the morality of actions in a way that would apply to machines:

5.1. There is unlikely to be a simple behavioral definition.  Instead there would be a second order definition criticizing predicates that ascribe morality to actions.

5.2. The theory must contain at least one axiom of morality that is not just a statement of physical fact.  Relative to this axiom, judgments of actions can be factual.

5.3. The theory of morality will presuppose a theory of belief in which statements of the form *"It believed the action would harm someone"* are defined.  The theory must ascribe beliefs about others' welfare and perhaps about the being's own welfare.

5.4. It might be necessary to consider the machine as imbedded in some kind of society in order to ascribe morality to its actions.






5.5. No present machines admit such a belief structure, and no such structure may be required to make a machine with arbitrarily high intelligence in the sense of problem-solving ability.

5.6. It seems doubtful that morally judgable machines or machines to which rights might legitimately be ascribed are desirable if and when it becomes possible to make them.

→→→→→→More mental qualities will be discussed.←←←←←←←←←








































OTHER VIEWS ABOUT MIND

→→→→→This section will be written←←←←←
















































NOTES

1. We don't claim that the work in artificial intelligence has yet shown how to reach human-level intellectual performance.  Our approach to the AI problem involves identifying the intellectual mechanisms required for problem solving and describing them precisely, and therefore we are at the end of the philosophical spectrum that requires everything to be formalized in mathematical logic.  It is sometimes said that one studies philosophy in order to advance beyond one's untutored naive world-view, but unfortunately for artificial intelligence, no-one has yet been able to give a precise description of even a naive world-view.

2. Present AI programs operate in limited domains, e.g. play particular games, prove theorems in a particular logical system, or understand natural language sentences covering a particular subject matter and with other semantic restrictions.  General intelligence will require general models of situations changing in time, actors with goals and strategies for achieving them, and knowledge about how information can be obtained.

3. This kind of teleological analysis is often useful in understanding natural organisms as well as machines.  Here evolution takes the place of design, and we often understand the function performed by an organ before we understand its detailed physiology.  Teleological analysis is applicable to psychological and social phenomena in so far as these are designed or have been subject to selection.  However, teleological analysis fails when applied to aspects of nature which have neither been designed nor produced by natural selection from a population.  Much medieval science was based on the Judeo-Christian-Moslem religious hypothesis that the details of the world were designed by God for the benefit of man.  The strong form of this hypothesis was abandoned at the time of Galileo and Newton but occasionally recurs.  Barry Commoner's (1972) axiom of ecology "Nature knows best" seems to be mistakenly based on the notion that nature as a whole is the result of an evolutionary process that selected the "best nature".




4. Behavioral definitions are often favored in philosophy.  A system is defined to have a certain quality if it behaves in a certain way or is *disposed* to behave in a certain way.  Their ostensible virtue is conservatism; they don't postulate internal states that are unobservable to present science and may remain unobservable.  However, such definitions are awkward for mental qualities, because, as common sense suggests, a mental quality may not result in behavior, because another mental quality may prevent it; e.g. I may think you are thick-headed, but politeness may prevent my saying so.  Particular difficulties can be overcome, but an impression of vagueness remains.  The liking for behavioral definitions stems from caution, but I would interpret scientific experience as showing that boldness in postulating complex structures of unobserved entities - provided it is accompanied by a willingness to take back mistakes - is more likely to be rewarded by understanding of and control over nature than is positivistic timidity.  It is particularly instructive to imagine a determined behaviorist trying to figure out an electronic computer.  Trying to define each quality behaviorally would get him nowhere; only simultaneously postulating a complex structure including memory, arithmetic unit, control structure, and input-output would yield predictions that could be compared with experiment.

5. Whether a system has beliefs and other mental qualities is not primarily a matter of complexity of the system.  Although cars are more complex than thermostats, it is hard to ascribe beliefs or goals to them, and the same is perhaps true of the basic hardware of a computer, i.e. the part of the computer that executes the program without the program itself.

6. Our ability to derive the laws of higher levels of organization from knowledge of lower level laws is also limited by universality. Namely, while there appears to be essentially one possible chemistry allowed by the laws of physics, the laws of physics and chemistry allow many biologies, and, because the neuron is a universal computing element, an arbitrary mental structure is allowed by basic neurophysiology. This means that in order to determine human mental structure, one must either make psychological experiments, or determine the actual anatomical structure of the brain and the information stored in it, or reason from the fact that the brain is capable of certain problem solving performance to the structures that must be present to provide that performance. In this respect, our position is similar to that of the Life robot.

7. Philosophy and artificial intelligence. These fields overlap in the following way: in order to make a computer program behave intelligently, its designer must build into it a view of the world in general, apart from what he includes about particular sciences. (The skeptic who doubts whether there is anything to say about the world apart from the particular sciences should try to write a computer program that can figure out how to get to Timbuktoo, taking into account not only the facts about travel in general but also facts about which people and documents have what information, and what information will be required at different stages of the trip and when and how it is to be obtained. He will rapidly discover that he is lacking a science of common sense, i.e. he will be unable to formally express and build into his program "what everybody knows". Maybe philosophy could be defined as an attempted science of common sense, or else the science of common sense should be a definite part of philosophy.)
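
A tiny fragment of what such a program would have to represent can be sketched in modern terms (the sources, stages, and information needs below are all invented for illustration, not a proposal for the actual formalism):

```python
# Hypothetical sketch of common sense travel knowledge: which people or
# documents hold which information, and at which stage of the trip each
# piece of information is required.
sources = {
    "visa requirements": "Malian consulate",
    "flight schedules": "airline timetable",
    "road conditions": "travelers recently returned from Mali",
}

itinerary = [
    ("before departure", "visa requirements"),
    ("booking", "flight schedules"),
    ("final leg", "road conditions"),
]

# Pair each stage with the source to consult for the information it needs.
plan = [(stage, need, sources[need]) for stage, need in itinerary]
for stage, need, source in plan:
    print(f"{stage}: consult {source} for {need}")
```

Even this caricature shows the difficulty: the hard part is not the lookup but filling the tables with "what everybody knows" in a form general enough to cover trips the programmer never anticipated.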

Artificial intelligence has another component which philosophers have not studied, namely heuristics. Heuristics is concerned with the question: given the facts and a goal, how should the machine investigate the possibilities and decide what to do? On the other hand, artificial intelligence is not much concerned with aesthetics and ethics.
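
One standard form heuristics takes (a modern sketch, not from this draft) is search guided by an evaluation function: given a start state, the moves the facts allow, and a goal, investigate the most promising possibilities first. The problem instance below is invented for illustration:

```python
import heapq

def best_first(start, goal, moves, h):
    """Explore states in order of the heuristic estimate h(state)."""
    frontier = [(h(start), start, [start])]  # (estimate, state, path)
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # goal unreachable from start

# Toy instance: reach 7 from 0 by steps of +1 or +2, guided by
# distance-to-goal as the heuristic.
path = best_first(0, 7, lambda s: [s + 1, s + 2], lambda s: abs(7 - s))
# path is [0, 2, 4, 6, 7]: the heuristic favors the larger step until
# the last move.
```

The heuristic does not change what is true about the problem; it only controls which possibilities the program examines, which is exactly the question the facts and the goal leave open.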

Not all approaches to philosophy lead to results relevant to the artificial intelligence problem. On the face of it, a philosophy that entailed the view that artificial intelligence was impossible would be unhelpful, but besides that, taking artificial intelligence seriously suggests some philosophical points of view. I am not sure that all I shall list are required for pursuing the AI goal - some of them may be just my prejudices - but here they are:

7.1. The relation between a world view and the world should be studied by methods akin to metamathematics, in which systems are studied from the outside. In metamathematics we study the relation between a mathematical system and its models. Philosophy (or perhaps metaphilosophy) should study the relation between world structures and systems within them that seek knowledge. Just as the metamathematician can use any mathematical methods in this study and distinguishes the methods he uses from those being studied, so the philosopher should use all his scientific knowledge in studying philosophical systems from the outside.

Thus the question "How do I know?" is best answered by studying "How does it know?", getting the best answer that the current state of science and philosophy permits, and then seeing how this answer stands up to doubts about one's own sources of knowledge.

7.2. We regard metaphysics as the study of the general structure of the world and epistemology as studying what knowledge of the world can be had by an intelligence with given opportunities to observe and experiment. We need to distinguish between what can be determined about the structure of humans and machines by scientific research over a period of time, experimenting with many individuals, and what can be learned in a particular situation with particular opportunities to observe. From the AI point of view, the latter is as important as the former, and we suppose that philosophers would also consider it part of epistemology. The possibilities of reductionism are also different for theoretical and everyday epistemology. We could imagine that the rules of everyday epistemology could be deduced from a knowledge of physics and the structure of the being and the world, but we can't see how one could avoid using mental concepts in expressing knowledge actually obtained by the senses.






7.3. It is now accepted that the basic concepts of physical theories are far removed from observation. The human sense organs are many levels of organization removed from quantum mechanical states, and we have learned to accept the complication this causes in verifying physical theories. Experience in trying to make intelligent computer programs suggests that the basic concepts of the common sense world are also complex and not always directly accessible to observation. In particular, the common sense world is not a construct from sense data, although sense data play an important role in it. When a man or a computer program sees a dog, we will need both the relation between the observer and the dog and the relation between the observer and the brown patch in order to construct a good theory of the event.

7.4. In spirit this paper is materialist, but it is logically compatible with some other philosophies. Thus cellular automaton models of the physical world may be supplemented by supposing that certain complex configurations interact with additional automata called souls that also interact with each other. Such interactionist dualism won't meet the emotional or spiritual objections to materialism, but it does provide a logical niche for any empirically argued belief in telepathy, communication with the dead and other psychic phenomena. A person who believed the alleged evidence for such phenomena and still wanted a scientific explanation could model his beliefs with auxiliary automata.
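
The supplementation can be made concrete in a toy model (entirely hypothetical; the rule, the coupling, and the constants below are invented for illustration): a one-dimensional cellular automaton standing in for physics, coupled to a separate automaton with its own state that reads the configuration and occasionally writes back into it.

```python
# Toy model: a one-dimensional cellular automaton (rule 90, on a ring)
# supplemented by an auxiliary automaton that reads the configuration
# and occasionally acts back on it.
def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(cells, soul, steps):
    for _ in range(steps):
        cells = step(cells)              # ordinary "physics"
        soul = (soul + sum(cells)) % 7   # auxiliary automaton observes
        if soul == 0:
            cells = cells[:]
            cells[0] ^= 1                # ...and intervenes in the world
    return cells, soul

cells, soul = run([1, 0, 0, 0, 0, 0, 0, 0], 0, 5)
```

Nothing here is meant as physics or psychology; it only shows that the combined system is as definite and lawful an object of study as the unsupplemented automaton.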


















REFERENCES

References will be supplied.

John McCarthy
Artificial Intelligence Laboratory
Stanford University
Stanford, California 94305








































